Fruit quality and health strongly affect market value, customer satisfaction, and overall farm yield. Manual inspection remains labour-intensive, subjective, and prone to inconsistency. This paper presents an automated system that uses deep learning, specifically transfer learning with MobileNetV2, to classify fruits as fresh or spoiled. Classical machine learning (ML) methods such as Support Vector Machines, Random Forest, and K-Nearest Neighbours are compared with modern Convolutional Neural Network (CNN) architectures (VGG16, MobileNetV1, and MobileNetV2). Experimental evaluation shows that MobileNetV2 not only achieves high training accuracy but also maintains consistently strong validation metrics. These findings underscore the model's potential for deployment on mobile platforms and in real-time agricultural applications, improving productivity and reducing post-harvest losses.
Introduction
1. Background and Motivation
Agriculture is essential to food security and the economy, especially in regions heavily reliant on local produce. A major issue is post-harvest spoilage of fruits and vegetables, which leads to significant food waste and economic loss. According to the FAO, one-third of global food production (~1.3 billion tons) is wasted annually, much of it due to poor handling and storage.
2. Solution with AI and Deep Learning
Recent advances in Artificial Intelligence (AI), especially Deep Learning (DL) and Transfer Learning, offer promising solutions for automated fruit quality classification. This study uses MobileNetV2—a lightweight and efficient convolutional neural network (CNN)—to classify apples, bananas, and oranges as fresh or rotten.
3. Methodology Overview
Dataset: 13,599 images (from Kaggle) organized into six classes (e.g., FreshApple, RottenBanana).
Preprocessing: Image resizing, normalization, and augmentation (rotation, flipping, brightness adjustment) to improve model generalization and reduce overfitting.
Model Training:
Transfer learning with MobileNetV2 pre-trained on ImageNet.
Comparison with other models: VGG16, MobileNetV1, and traditional ML methods (SVM, Random Forest, KNN).
Optimization using RMSProp optimizer for faster convergence and better generalization.
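The training setup above can be sketched in Keras as follows. This is a minimal, hypothetical configuration: the input size, augmentation parameters, learning rate, and classifier head are illustrative assumptions, since the paper's exact settings are not specified here.

```python
import tensorflow as tf

NUM_CLASSES = 6  # e.g. FreshApple, RottenApple, ..., RottenOrange

# Augmentation mirroring the preprocessing steps described above
# (rotation, flipping, brightness adjustment); factors are assumptions.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomBrightness(0.2),
])

# MobileNetV2 backbone pre-trained on ImageNet, classifier head removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pre-trained features (transfer learning)

model = tf.keras.Sequential([
    augment,
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# RMSProp optimizer, as described above; learning rate is an assumption.
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the backbone means only the small dense head is trained, which is what makes transfer learning fast on a modest dataset like the 13,599-image one described above.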
4. Model Performance
MobileNetV2 achieved 97% validation accuracy with high generalization and minimal overfitting.
Outperformed traditional models:
SVM: 85%
Random Forest: 88%
Other deep learning models like VGG16 and MobileNetV1 were also tested but showed signs of overfitting or higher resource demand.
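The accuracy figures above can be reproduced from raw predictions with a simple helper. The sketch below uses only the standard library; the labels are illustrative placeholders, not the study's actual data.

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def per_class_accuracy(y_true, y_pred):
    """Accuracy per class, useful for spotting a model that confuses
    one pair of classes (e.g. FreshBanana vs. RottenBanana)."""
    totals, correct = Counter(y_true), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            correct[t] += 1
    return {c: correct[c] / totals[c] for c in totals}

# Illustrative labels only:
truth = ["FreshApple", "FreshApple", "RottenBanana", "FreshOrange"]
preds = ["FreshApple", "RottenApple", "RottenBanana", "FreshOrange"]
print(accuracy(truth, preds))            # 0.75
print(per_class_accuracy(truth, preds))  # FreshApple: 0.5, others: 1.0
```

Overall accuracy alone can hide class-level failures, which is why a per-class breakdown is worth reporting alongside the headline validation number.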
5. Architectural Advantages of MobileNetV2
Utilizes inverted residuals and linear bottlenecks for:
Reduced computation
High feature reuse
Efficient depthwise separable convolutions
Optimized for mobile and real-time applications, making it ideal for deployment in resource-constrained agricultural settings.
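The efficiency gain from depthwise separable convolutions can be made concrete with a quick parameter count: a standard k×k convolution mapping C_in to C_out channels needs k·k·C_in·C_out weights, whereas a depthwise separable convolution needs only k·k·C_in (depthwise) plus C_in·C_out (pointwise). A pure-Python sketch, with layer sizes chosen purely for illustration:

```python
def standard_conv_params(k, c_in, c_out):
    # One k x k kernel per (input-channel, output-channel) pair.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # One k x k depthwise kernel per input channel, followed by a
    # 1x1 pointwise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 32 -> 64 channels.
std = standard_conv_params(3, 32, 64)        # 18432
sep = depthwise_separable_params(3, 32, 64)  # 288 + 2048 = 2336
print(std, sep, round(std / sep, 1))         # roughly an 8x reduction
```

This roughly k²-fold saving in weights (and multiply-accumulates) is what makes MobileNet-family models practical on phones and embedded devices.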
Conclusion
The study demonstrates the effectiveness of MobileNetV2 with transfer learning for real-time fruit quality assessment. The model combines accuracy, speed, and resource efficiency, making it a viable tool for reducing food waste through early spoilage detection in agricultural supply chains.